Advanced Algorithm and Parallel Programming (AAPP) — TiTilda Notes

Index

Random Access Machine (RAM)

The random access machine is a theoretical model of computation used to analyze sequential algorithms.

RAM Complexity Analysis

The complexity of an algorithm is measured by:

- Time complexity: the number of elementary steps executed, where each step has unit cost.
- Space complexity: the number of memory cells used.
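As a minimal sketch of the unit-cost model (Python used only for illustration; the step-counting instrumentation is an assumption, not part of the formal model), a linear search can be instrumented to count the RAM steps it performs:

```python
# Hypothetical sketch: counting unit-cost RAM steps in a linear search.
def linear_search_steps(arr, target):
    """Return (index, steps), counting one unit-cost comparison per element."""
    steps = 0
    for i, value in enumerate(arr):
        steps += 1  # each comparison is one RAM step
        if value == target:
            return i, steps
    return -1, steps
```

In the worst case (target absent) this performs n steps, so its time complexity is T(n) = O(n).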

Parallel Random Access Machine (PRAM)

The parallel random access machine is a theoretical model of computation that is used to analyze parallel algorithms.

M' = \langle M, X, Y, A \rangle

At each time step, every processor can simultaneously and synchronously perform:

- a read from a cell of the shared memory,
- a local computation,
- a write to a cell of the shared memory.
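These synchronous read/compute/write phases can be sketched with a classic PRAM algorithm: summing n values in O(log n) parallel steps by pairwise reduction. The Python below is a simulation of the model, not a real parallel program; all "processors" read first, then all write, mimicking lockstep execution:

```python
# Sketch of a synchronous PRAM-style sum reduction:
# at step d, processor i reads cells i and i + 2^d and writes their sum to cell i.
def pram_sum(values):
    mem = list(values)  # the shared memory
    n = len(mem)
    steps = 0
    stride = 1
    while stride < n:
        # synchronous read phase: every active processor reads its two cells
        reads = [(mem[i], mem[i + stride]) for i in range(0, n - stride, 2 * stride)]
        # synchronous write phase: every active processor writes its result
        for k, i in enumerate(range(0, n - stride, 2 * stride)):
            a, b = reads[k]
            mem[i] = a + b
        stride *= 2
        steps += 1
    return mem[0], steps
```

For n = 8 the reduction finishes in 3 = log2(8) parallel steps, compared to the n − 1 sequential additions a RAM would need.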

PRAM Classification

PRAMs are classified by how they handle concurrent access to the shared memory.

Reads from the shared memory can happen in two ways:

- Exclusive Read (ER): at most one processor can read a given memory cell per step.
- Concurrent Read (CR): multiple processors can read the same memory cell in the same step.

Writes to the shared memory can happen in two ways:

- Exclusive Write (EW): at most one processor can write to a given memory cell per step.
- Concurrent Write (CW): multiple processors can write to the same memory cell in the same step; a conflict-resolution policy (e.g. common, arbitrary, or priority) decides which value is stored.

The classification is done by combining the two types of access: EREW, CREW, and CRCW (ERCW is rarely considered).
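The usual CRCW conflict-resolution policies (common, arbitrary, priority) can be sketched as follows; this is a hypothetical simulation, where each write request is a `(processor_id, value)` pair targeting the same shared cell:

```python
# Hypothetical sketch of CRCW write-conflict resolution policies.
def resolve_concurrent_write(requests, policy):
    """requests: list of (processor_id, value) pairs writing the same cell."""
    if policy == "common":
        # all processors must write the same value, otherwise it is an error
        values = {v for _, v in requests}
        if len(values) != 1:
            raise ValueError("COMMON CRCW requires identical values")
        return values.pop()
    if policy == "priority":
        # the processor with the lowest index wins
        return min(requests)[1]
    if policy == "arbitrary":
        # any one request may succeed; here we simply pick the first
        return requests[0][1]
    raise ValueError("unknown policy")
```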

PRAM Complexity

The complexity of a PRAM algorithm is measured by:

| Metric | Symbol | Formula | Context / Meaning |
|---|---|---|---|
| Elapsed time | T_p(n) | - | The number of steps (time) to complete the algorithm on an input of size n using p processors. |
| Sequential time | T^*(n) | - | The time complexity of the best known sequential algorithm for the same problem. Note that \mathbf{T^*(n) \neq T_1(n)} in general. |
| Speedup | SU_p(n) | \frac{T^*(n)}{T_p(n)} | How much faster the parallel algorithm is compared to the best sequential one. Ideally, SU_p(n) \approx p (linear speedup). |
| Efficiency | E_p(n) | \frac{SU_p(n)}{p} = \frac{T^*(n)}{p \cdot T_p(n)} | The average utilization of the p processors; 0 \leq E_p(n) \leq 1, and an efficiency close to 1 is considered optimal. |
| Cost / Work | C_p(n) = W(n) | p \cdot T_p(n) | The total number of operations performed by all p processors during the execution. |
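As a quick sketch (Python only for illustration), the metrics in the table can be computed from T*(n), T_p(n), and p:

```python
# Sketch: computing the PRAM metrics defined above.
def pram_metrics(t_seq, t_par, p):
    """t_seq = T*(n), t_par = T_p(n), p = number of processors."""
    speedup = t_seq / t_par       # SU_p(n)
    efficiency = speedup / p      # E_p(n), between 0 and 1
    cost = p * t_par              # C_p(n) = W(n)
    return speedup, efficiency, cost
```

For example, with T*(n) = 100, T_p(n) = 10, and p = 20 processors, the speedup is 10 but the efficiency is only 0.5: half of the processors' capacity is wasted.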

Scaling

Scaling is the ability of an algorithm to use an increasing number of processors efficiently.

Scaling describes the relationship between three parameters:

- the problem size (n),
- the number of processors (p),
- the execution time (T).

Scaling is classified into two types: strong scaling and weak scaling.

Strong Scaling

In strong scaling the problem size (n) is kept constant while the number of processors (p) is increased. The goal is to reduce the time (T) taken to complete the computation.

To analyze strong scaling, we use Amdahl's Law, where f is the fraction of the algorithm that can be parallelized:

SU_p(n) = \frac{1}{(1 - f) + \frac{f}{p}}

Due to the sequential part, there is a limit to the speedup that can be achieved by adding more processors.

\lim_{p \to \infty} SU_p(n) = \frac{1}{1 - f}

This means that even with an infinite number of processors, the speedup is limited by the sequential portion of the algorithm.

Ideally, for strong scaling, we want:

T \propto \frac{1}{p}
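Amdahl's Law is a one-liner; a minimal sketch (Python only for illustration, with f the parallelizable fraction as above):

```python
# Sketch of Amdahl's Law: f is the parallelizable fraction of the work.
def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)
```

With f = 0.9 the speedup can never exceed 1 / (1 − 0.9) = 10, no matter how many processors are added, illustrating the limit above.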

Weak Scaling

In weak scaling, the problem size (n) is increased proportionally with the number of processors (p), keeping the workload per processor constant. The goal is to maintain a constant time (T) as both n and p increase.

To analyze weak scaling, we use Gustafson's Law, where s is the fraction of the execution time spent in the serial part:

SU_p(n) = s + p \cdot (1 - s)

This law suggests that as we increase the number of processors, the speedup can grow linearly with p, assuming the parallelizable portion is significant.

Ideally, for weak scaling, we want the execution time to remain constant as the problem size grows proportionally with the number of processors:

T \approx \text{const} \quad \text{when} \quad n \propto p
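Gustafson's Law is equally simple to compute; a minimal sketch (Python only for illustration, with s the serial fraction as above):

```python
# Sketch of Gustafson's Law: s is the serial fraction of the scaled workload.
def gustafson_speedup(s, p):
    return s + p * (1.0 - s)
```

For example, with s = 0.1 and p = 64 the scaled speedup is 0.1 + 64 · 0.9 = 57.7: unlike Amdahl's Law, it keeps growing linearly with p.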

Last modified:
Written by: Andrea Lunghi